
    Confidence Intervals for Data-Driven Inventory Policies with Demand Censoring

    We revisit the classical dynamic inventory management problem of Scarf (1959b) from the perspective of a decision-maker who has n historical selling seasons of data and must make ordering decisions for the upcoming season. We develop a nonparametric estimation procedure for the (S, s) policy that is consistent, then characterize the finite-sample properties of the estimated (S, s) levels by deriving their asymptotic confidence intervals. We also consider the setting in which at least some of the past selling seasons of data are censored due to the absence of backlogging, and show that the intuitive procedure of first correcting the demand data for censoring yields inconsistent estimates. We then show how to correctly use the censored data to obtain consistent estimates and derive asymptotic confidence intervals for this policy using Stein’s method. We further show the confidence intervals can be used to effectively bound the difference between the expected total cost of an estimated policy and that of the optimal policy. We validate our results with extensive computations on simulated data. Our results extend to the repeated newsvendor problem and the base-stock policy problem by appropriate parameter choices.
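
    For intuition on the flavor of these interval estimates, consider the repeated newsvendor special case, where the optimal base-stock level is the b/(b+h) quantile of demand. The following is a minimal sketch (not the paper's procedure; the costs, demand distribution, and sample size are illustrative assumptions) of a plug-in quantile estimate with a bootstrap confidence interval:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative cost parameters: b = backorder cost, h = holding cost.
b, h = 9.0, 1.0
tau = b / (b + h)  # critical ratio: the optimal base-stock level is the tau-quantile of demand

# n = 200 simulated (uncensored) selling seasons of demand.
demand = rng.gamma(shape=4.0, scale=25.0, size=200)

S_hat = np.quantile(demand, tau)  # plug-in estimate of the base-stock level

# Nonparametric bootstrap 95% confidence interval for the estimated level.
boot = np.array([
    np.quantile(rng.choice(demand, size=demand.size, replace=True), tau)
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"S_hat = {S_hat:.1f}, 95% CI = ({lo:.1f}, {hi:.1f})")
```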

    The Big Data Newsvendor: Practical Insights from Machine Learning

    We investigate the data-driven newsvendor problem when one has n observations of p features related to the demand as well as historical demand data. Rather than a two-step process of first estimating a demand distribution and then optimizing for the optimal order quantity, we propose solving the “Big Data” newsvendor problem via single-step machine learning algorithms. Specifically, we propose algorithms based on the Empirical Risk Minimization (ERM) principle, with and without regularization, and an algorithm based on Kernel-weights Optimization (KO). The ERM approaches, equivalent to high-dimensional quantile regression, can be solved as convex optimization problems, and the KO approach by a sorting algorithm. We analytically justify the use of features by showing that their omission yields inconsistent decisions. We then derive finite-sample performance bounds on the out-of-sample costs of the feature-based algorithms, which quantify the effects of dimensionality and cost parameters. Our bounds, based on algorithmic stability theory, generalize known analyses for the newsvendor problem without feature information. Finally, we apply the feature-based algorithms to nurse staffing in a hospital emergency room using a data set from a large UK teaching hospital and find that (i) the best ERM and KO algorithms beat the best-practice benchmark by 23% and 24%, respectively, in out-of-sample cost, and (ii) the best KO algorithm is faster than the best ERM algorithm by three orders of magnitude and faster than the best-practice benchmark by two orders of magnitude.
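
    The ERM-quantile regression equivalence mentioned above can be made concrete. With underage cost b and overage cost h, the newsvendor ERM objective is the pinball loss at quantile level b/(b+h), so a regularized linear decision rule can be fit with scikit-learn's QuantileRegressor. A minimal sketch (not the paper's implementation; the costs, dimensions, and simulated data are assumptions):

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(1)

# Illustrative costs: b = underage (shortage) cost, h = overage (holding) cost.
b, h = 5.0, 1.0
tau = b / (b + h)  # the ERM newsvendor is quantile regression at level tau

# Simulated data: n observations of p demand-related features.
n, p = 500, 10
X = rng.normal(size=(n, p))
demand = 100.0 + 5.0 * (X @ rng.normal(size=p)) + rng.normal(scale=10.0, size=n)

# L1-regularized ERM decision rule (alpha is the regularization weight).
model = QuantileRegressor(quantile=tau, alpha=0.01, solver="highs")
model.fit(X, demand)

x_new = rng.normal(size=(1, p))
print("data-driven order quantity:", model.predict(x_new)[0])
```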

    Data taking strategy for the phase study in ψ′ → K⁺K⁻

    The study of the relative phase between strong and electromagnetic amplitudes is of great importance for understanding the dynamics of charmonium decays. Information on the phase can be obtained model-independently by fitting scan data for certain decay channels, one of which is ψ′ → K⁺K⁻. To determine the optimal data-taking strategy for a scan experiment measuring the phase in ψ′ → K⁺K⁻, the minimization process is analyzed from a theoretical point of view. The result indicates that for a one-parameter fit, a single data-taking point in the vicinity of the resonance peak is sufficient to achieve the optimal precision. Numerical results are obtained by fitting simulated scan data. Beyond the relative phase between strong and electromagnetic amplitudes, the method is extended to the fits of other resonance parameters, such as the mass and the total decay width of ψ′.
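
    As a toy illustration of such a one-parameter scan fit, one can model the cross section as a coherent sum of a continuum amplitude and a Breit-Wigner resonance with relative phase φ, simulate scan points, and fit φ by least squares. This is a sketch under assumed amplitude strengths and noise levels, not the paper's amplitude parameterization:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy line shape: continuum amplitude plus a Breit-Wigner with relative phase phi.
M, G = 3.686, 0.000294   # psi' mass and total width in GeV
a_c, a_r = 1.0, 0.002    # assumed continuum / resonance strengths (illustrative)

def sigma(E, phi):
    bw = a_r / (E - M + 1j * G / 2.0)              # Breit-Wigner amplitude
    return np.abs(a_c + np.exp(1j * phi) * bw) ** 2

rng = np.random.default_rng(2)
E = np.linspace(M - 0.002, M + 0.002, 9)           # scan points around the peak
phi_true = np.pi / 2
y = sigma(E, phi_true) * rng.normal(1.0, 0.02, size=E.size)  # 2% statistical noise

(phi_fit,), cov = curve_fit(sigma, E, y, p0=[1.0])
print(f"phi_fit = {phi_fit:.3f} +/- {np.sqrt(cov[0, 0]):.3f} rad")
```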

    Sub-6GHz 4G/5G Conformal Glasses Antennas

    The difficulty of antenna design for glasses is that the structure of glasses offers little geometric variety, and the space available for antenna design is severely limited. Against this background, this paper proposes an integrated design of 4G and 5G antennas for glasses. The key highlight of the design is that it makes full use of the limited three-dimensional structure the glasses provide and integrates the antennas seamlessly into the physical structure. Specifically, two antennas for 4G communication are arranged on the two frames of the glasses, and four antennas for 5G communication are arranged on the two legs. In this way, the space provided by the glasses is fully exploited for antenna design while a certain distance is maintained between the 4G and 5G antennas, so that the performance of both can be guaranteed. The 4G antenna consists of a loop structure printed on the frame and leg of the glasses and a parasitic branch strip printed on the front of the leg. Its resonance modes are mainly loop, monopole, and dipole modes, covering the two 4G bands of 0.824-0.96 GHz and 1.71-2.69 GHz. Each 5G antenna operates mainly in an open-slot mode etched into the metal ground plane of an FR4 substrate on the glasses leg, and the slot antennas cover the two 5G bands of 3.3-3.6 GHz and 4.8-5.0 GHz. Finally, the glasses and the antennas were fabricated on FR4 substrates and measured. The measured results show that the proposed antennas perform well and are promising for 4G/5G communication through glasses.

    Dynamic Procurement of New Products with Covariate Information: The Residual Tree Method

    Problem definition: We study the practice-motivated problem of dynamically procuring a new, short life-cycle product under demand uncertainty. The firm does not know the demand for the new product but has data on similar products sold in the past, including demand histories and covariate information such as product characteristics. Academic/practical relevance: The dynamic procurement problem has long attracted academic and practitioner interest, and we solve it in an innovative data-driven way with proven theoretical guarantees. This work is also the first to leverage the power of covariate data in solving this problem. Methodology: We propose a new, combined forecasting and optimization algorithm called the Residual Tree method, and analyze its performance via epi-convergence theory and computations. Our method generalizes the classical Scenario Tree method by using covariates to link historical data on similar products to construct demand forecasts for the new product. Results: We prove, under fairly mild conditions, that the Residual Tree method is asymptotically optimal as the size of the data set grows. We also numerically validate the method for problem instances derived using data from the global fashion retailer Zara. We find that ignoring covariate information leads to systematic bias in the optimal solution, translating to a 6–15% increase in the total cost for the problem instances under study. We also find that solutions based on trees using just 2–3 branches per node, which is common in the existing literature, are inadequate, resulting in 30–66% higher total costs compared with our best solution. Managerial implications: The Residual Tree is a new and generalizable approach that uses past data on similar products to manage new product inventories. We also quantify the value of covariate information and of granular demand modeling.
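
    A minimal sketch of the core idea (details assumed for illustration; the actual method builds a multi-stage scenario tree with epi-convergence guarantees): regress historical demand on product covariates, keep the residuals, and attach them to the new product's covariate-based point forecast to generate demand scenarios.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Historical data: covariates (e.g., price, style attributes) and demands of
# comparable past products. All numbers here are illustrative.
n, p = 100, 4
X_hist = rng.normal(size=(n, p))
demand_hist = 50.0 + X_hist @ np.array([10.0, -5.0, 3.0, 8.0]) + rng.normal(scale=6.0, size=n)

# Step 1: fit a regression of demand on covariates.
reg = LinearRegression().fit(X_hist, demand_hist)

# Step 2: the residuals capture demand uncertainty not explained by covariates.
residuals = demand_hist - reg.predict(X_hist)

# Step 3: demand scenarios for the new product = its point forecast plus the
# residuals, equally weighted (a one-stage "residual tree").
x_new = rng.normal(size=(1, p))
scenarios = reg.predict(x_new)[0] + residuals
print("scenario mean:", scenarios.mean().round(1),
      "| 5%-95% range:", np.percentile(scenarios, [5, 95]).round(1))
```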

    Modeling stormwater management at the city district level in response to changes in land use and low impact development

    Mitigating the impact of increasing impervious surfaces on stormwater runoff through low impact development (LID) is currently being widely promoted at site and local scales. In turn, a series of distributed LID implementations may produce cumulative effects and benefit stormwater management at larger regional scales. However, the potential of multiple LID implementations to mitigate the broad-scale impacts of urban stormwater is not yet fully understood, particularly across different design strategies for reducing directly connected impervious areas (DCIA). In this study, the hydrological responses of stormwater runoff characteristics to four different land use conversion scenarios at the city scale were explored using the GIS-based Stormwater Management Model (SWMM). Model simulation results confirmed the effectiveness of LID controls; however, they also indicated that even under the most beneficial scenarios, the hydrological performance of developed areas still did not reach the pre-development level, especially given the pronounced conversion of pervious to impervious land.
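
    For readers who want to script this kind of scenario comparison, SWMM models can be driven from Python via the pyswmm wrapper. A minimal sketch (the input file names are hypothetical, one per land-use/LID scenario; using pyswmm is this sketch's assumption, not a tool reported in the study):

```python
# Batch-run SWMM input files for different land-use / LID scenarios.
# Requires: pip install pyswmm (and the hypothetical .inp files below).
from pyswmm import Simulation

for inp in ["scenario_baseline.inp", "scenario_lid.inp"]:
    with Simulation(inp) as sim:
        for _ in sim:   # advance the simulation step by step
            pass
        print(inp, "completed; simulation clock at", sim.current_time)
```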

    Distinguishing mixed quantum states: Minimum-error discrimination versus optimum unambiguous discrimination

    We consider two different optimized measurement strategies for the discrimination of nonorthogonal quantum states. The first is conclusive discrimination with a minimum probability of inferring an erroneous result, and the second is unambiguous, i.e., error-free, discrimination with a minimum probability of obtaining an inconclusive outcome, where the measurement fails to give a definite answer. For distinguishing between two mixed quantum states, we investigate the relation between the minimum error probability achievable in conclusive discrimination and the minimum failure probability that can be reached in unambiguous discrimination of the same two states. The latter turns out to be at least twice as large as the former for any two given states. As an example, we treat the case in which the state of the quantum system is known to be, with arbitrary prior probability, either a given pure state or a uniform statistical mixture of any number of mutually orthogonal states. For this case we derive an analytical result for the minimum probability of error and perform a quantitative comparison with the minimum failure probability.
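
    The stated relation is easy to check numerically for two pure states, where both quantities have closed forms: the Helstrom minimum error probability P_E = (1/2)(1 − √(1 − 4η₁η₂|⟨ψ₁|ψ₂⟩|²)) and, for priors that are not too unequal, the optimal unambiguous-discrimination failure probability Q = 2√(η₁η₂)|⟨ψ₁|ψ₂⟩|. A minimal sketch (the particular states and priors are arbitrary choices):

```python
import numpy as np

# Two nonorthogonal pure qubit states with prior probabilities eta1, eta2.
theta = 0.4  # arbitrary angle controlling the overlap
psi1 = np.array([1.0, 0.0])
psi2 = np.array([np.cos(theta), np.sin(theta)])
eta1, eta2 = 0.5, 0.5

overlap = abs(psi1 @ psi2)

# Helstrom bound: minimum error probability of conclusive discrimination.
P_E = 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * eta1 * eta2 * overlap**2))

# Optimal failure probability of unambiguous discrimination
# (valid when the priors are not too unequal).
Q = 2.0 * np.sqrt(eta1 * eta2) * overlap

print(f"P_E = {P_E:.4f}, Q = {Q:.4f}, Q / P_E = {Q / P_E:.2f} (>= 2)")
```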

    Stoner gap in the superconducting ferromagnet UGe2

    We report the temperature (T) dependence of ferromagnetic Bragg peak intensities and dc magnetization of the superconducting ferromagnet UGe2 under pressure (P). We have found that the low-T behavior of the uniform magnetization can be explained by a conventional Stoner model. A functional analysis of the data produces the following results: the ferromagnetic state below a critical pressure can be understood as a perfectly polarized state, in which heavy quasiparticles occupy only the majority spin bands. The Stoner gap Δ(P) decreases monotonically with increasing pressure and increases linearly with magnetic field. We show that the present analysis based on the Stoner model is justified by a consistency check, i.e., a comparison of the density of states at the Fermi energy deduced from the analysis with the observed electronic specific heat coefficients. We also discuss the influence of the ferromagnetism on the superconductivity.
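
    For context, the low-temperature form such a functional analysis typically fits is the activated demagnetization of a perfectly polarized Stoner ferromagnet (a standard expression, stated here as an assumption since the abstract does not spell it out):

```latex
% Thermal demagnetization across the Stoner gap \Delta(P);
% \alpha is a fit prefactor and k_B the Boltzmann constant.
M(T) \simeq M(0)\left[ 1 - \alpha\, T^{3/2} \exp\!\left( -\frac{\Delta(P)}{k_B T} \right) \right]
```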